Patent abstract:
A detector assembly (100) is provided which includes a semiconductor detector (110), a plurality of pixelated anodes (114), and at least one processing unit (120). The pixelated anodes (114) are disposed on a surface (112) of the semiconductor detector (110). Each pixelated anode is configured to generate a primary signal (32) in response to reception of a photon (116) and to generate at least one secondary signal (34; 35; 36; 37) in response to an induced charge caused by reception of a photon (116) by at least one surrounding anode (114). The at least one processing unit is operatively coupled to the pixelated anodes (114) and is configured to acquire a primary signal (32) from one (114a) of the anodes (114) in response to reception of a photon (116); acquire at least one secondary signal (34; 35; 36; 37) from at least one neighboring pixel; and determine an interaction depth (Z0; Z1; Z2; Z3; Z4) in the semiconductor detector (110) for the reception of the photon (116) by that anode (114a) of the anodes (114) using the at least one secondary signal (34; 35; 36; 37). Figure for the abstract: Fig 10
Publication number: FR3084933A1
Application number: FR1909135
Filing date: 2019-08-12
Publication date: 2020-02-14
Inventors: Arie Shahar;Yaron Glazer;Moshe Cohen-Erner;Avishai Ofan
Applicant: General Electric Co;
IPC main classification:
Patent description:

Description
Title of the invention: Systems and methods for determining depth of interaction
BACKGROUND OF THE INVENTION The subject matter described herein generally relates to apparatus and methods for diagnostic medical imaging, such as nuclear medicine (NM) imaging.
In NM imaging, for example, systems with multiple detectors or detection heads can be used to obtain an image of a subject, for example to scan a region of interest. For example, detectors can be positioned adjacent to the subject to acquire NM data, which is used to generate a three-dimensional (3D) image of the subject.
[0003] The imaging detectors can be used to detect the reception of photons from an object (for example, a human patient to whom a radioactive tracer has been administered) by the imaging detector. The depth of interaction (DOI), or location along the thickness of a detector at which photons are detected, can affect the strength of the signals generated by the detector in response to photons and can be used to determine the number and location of detected events. As a result, the DOI can be used to correct detector signals to improve the energy resolution and sensitivity of the detector. However, conventional approaches to determining DOI use signals from a cathode, requiring additional hardware and assembly complexity to collect and process the cathode signals. In addition, cathodes tend to be relatively large and to produce relatively noisy signals, reducing the accuracy and efficiency of using signals from cathodes.
Brief Description of the Invention In one embodiment, a radiation detector assembly is proposed which includes a semiconductor detector, several pixelated anodes, and at least one processor. The semiconductor detector has a surface. The pixelated anodes are arranged on the surface. Each pixelated anode is configured to generate a primary signal in response to reception of a photon by the pixelated anode and to generate at least one secondary signal in response to an induced charge caused by the reception of a photon by at least one surrounding anode. The at least one processor is operatively coupled to the pixelated anodes and is configured to acquire a primary signal from one of the anodes in response to reception of a photon by that anode of the anodes; acquire at least one secondary signal from at least one pixel neighboring that anode of the anodes in response to an induced charge caused by the reception of the photon by that anode of the anodes; and determine an interaction depth in the semiconductor detector for the reception of the photon by that anode of the anodes using the at least one secondary signal.
In another embodiment, an imaging method using a semiconductor detector is proposed. The semiconductor detector has a surface with several pixelated anodes disposed thereon. Each pixelated anode is configured to generate a primary signal in response to reception of a photon by the pixelated anode and to generate at least one secondary signal in response to an induced charge caused by the reception of a photon by at least one surrounding anode. The method includes acquiring a primary signal from one of the anodes in response to reception of a photon by that anode of the anodes, and acquiring at least one secondary signal from at least one pixel neighboring that anode of the anodes in response to an induced charge caused by the reception of the photon by that anode of the anodes. The method also includes determining an interaction depth in the semiconductor detector for the reception of the photon by that anode of the anodes using the at least one secondary signal.
In another embodiment, a method includes providing a semiconductor detector having a surface with a plurality of pixelated anodes disposed thereon. Each pixelated anode is configured to generate a primary signal in response to reception of a photon by the pixelated anode and to generate at least one secondary signal in response to an induced charge caused by the reception of a photon by at least one adjacent anode. The method also includes operatively coupling the pixelated anodes to at least one processor. Additionally, the method includes providing calibrated radiation at different depths along a side wall of the semiconductor detector, wherein the pixelated anodes generate primary signals and secondary signals in response to the calibrated radiation. Also, the method includes acquiring, with said at least one processor, the primary signals and the secondary signals from the pixelated anodes. The method further includes determining corresponding negative values of total induced charge for each of the different depths, and determining calibration information as a function of the negative values of total induced charge for each of the different depths.
Brief description of the drawings [0007] [Fig.1] shows a representation of the weighting potentials of a detector having a pixel biased by a voltage potential.
[Fig.2] represents four events within the detector of Figure 1.
[Fig.3] represents corresponding induced charges for the four events of Figure 2.
[Fig.4] represents five groups of events under a primary or collection pixel located at five different DOIs.
[Fig.5] represents the resulting uncollected or secondary signals for the events located at Z0 in Figure 4.
[Fig.6] represents the resulting uncollected or secondary signals for the events located at Z1 in Figure 4.
[Fig.7] represents the resulting uncollected or secondary signals for the events located at Z2 in Figure 4.
[Fig.8] represents the resulting uncollected or secondary signals for the events located at Z3 in Figure 4.
[Fig.9] represents the resulting uncollected or secondary signals for the events located at Z4 in Figure 4.
[Fig.10] shows a calibration system in accordance with various embodiments.
[Fig.11] provides a schematic view of a radiation detector assembly according to various embodiments.
[Fig.12] provides a flow diagram of a method according to various embodiments.
[Fig. 13] provides a flow diagram of a method according to various embodiments.
[Fig. 14] provides a schematic view of an imaging system according to various embodiments.
[Fig. 15] provides a schematic view of an imaging system according to various embodiments.
Detailed Description of the Invention The following detailed description of certain embodiments will be better understood when read in conjunction with the accompanying drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuits. For example, one or more of the functional blocks (for example, processors or memories) can be implemented in a single hardware element (for example, a general-purpose signal processor or a block of random access memory, a hard drive, or the like) or in multiple hardware elements. Similarly, the programs can be stand-alone programs, can be incorporated as subroutines in an operating system, can be functions in an installed software package, and the like. It should be understood that the various embodiments are not limited to the arrangements and means shown in the drawings.
As used herein, the terms "system", "unit" or "module" may include a hardware and/or software system that operates to perform one or more functions. For example, a module, unit, or system may include a computer processor, controller, or other logic-based device that performs operations based on instructions stored on a tangible, non-transitory computer-readable storage medium, such as a computer memory. Alternatively, a module, unit or system can include a hard-wired device that performs operations based on the hard-wired logic of the device. Various modules or units shown in the appended figures can represent hardware that operates according to software or hard-wired instructions, the software that directs the hardware to carry out the operations, or a combination thereof.
"Systems", "units" or "modules" can include or represent hardware and associated instructions (for example, software stored on a tangible, non-transient computer-readable storage medium, such as a hard disk computer, ROM, RAM, or the like) that perform one or more of the operations described herein. The hardware can include electronic circuits that include and / or are connected to one or more logic-based devices, such as microprocessors, processors, controllers, or the like. These devices may be ready-to-use devices which are appropriately programmed or which are appropriately instructed to perform operations described herein from the instructions described above. In addition or alternatively, one or more of these devices can be wired with logic circuits to perform these operations.
As used here, an element or a step stated in the singular and preceded by the word "a" or "an" must be understood as not excluding the plural of said elements or said steps, unless such an exclusion is explicitly stated. In addition, references to "one embodiment" should not be interpreted as excluding the existence of additional embodiments which also incorporate the features cited. In addition, unless explicitly stated otherwise, embodiments "comprising" or "having" an element or a plurality of elements having a particular property may include additional elements not having this property.
Various embodiments provide systems and methods for improving the sensitivity and/or energy resolution of image acquisition, for example in nuclear medicine (NM) imaging applications. In various embodiments, measurements of adjacent uncollected (or induced) transient signals are used to determine a depth of interaction (DOI) of corresponding events in a detector. Note that the same measurements of the uncollected adjacent transient signals can also be used to determine corresponding sub-pixel locations for events.
Generally, various embodiments provide methods and/or systems for measuring the negative value of induced signals and for deducing or determining the corresponding DOI as a function of the negative value. For some DOI values (for example, DOIs not located near an anode), all events with the same DOI produce approximately the same negative value for uncollected induced signals, regardless of their lateral position (the lateral position being defined as the x, y coordinates when the DOI is measured along a z axis). Therefore, various embodiments use a measured value of induced signals (for example, a measured negative value of uncollected induced signals, which can also be used for determining sub-pixel locations) to infer or determine the DOI as well as to provide 3D positioning of the events causing the uncollected induced signals.
A technical effect provided by various embodiments includes increased sensitivity and / or energy resolution of a detection system, such as an NM imaging detection system. A technical effect of various embodiments includes improved image quality. A technical effect of various embodiments includes reduced hardware and / or processing complexity associated with determining DOI through the elimination of the use of signals from a cathode.
Before tackling the specific aspects of particular embodiments, certain aspects of detector operation are discussed. Figure 1 represents a representation 10 of weighting potentials for a detector 11 having a pixelated anode 12 biased by a potential of 1 volt. The neighboring pixelated anodes 13 are not biased, or are at a ground potential of 0 volts. It will be noted that the representation of a pixelated anode held at a voltage while the neighboring pixelated anodes are not biased, in connection with various examples here, is provided for clarity of illustration and ease of representation; however, in practice, each pixelated anode of a detector can be biased at a similar voltage. In the example shown in Figure 1, the cathode 14 is grounded at 0 volts. The solid curves 15 show the electric field lines, the dotted curves 16 showing the equipotential lines. The equipotential lines are perpendicular to the electric field lines at the points where the lines cross.
The weighting potentials of Figure 1 are represented in accordance with the Shockley-Ramo theorem. According to this theorem, the induced current produced by the weighting potential is described by i = qE·V = q|E||V|·cos(α), where i is the induced current, q is the electronic charge, E·V is the scalar product between the electric field E of the weighting potential and the velocity V of the electron, and α is the angle between the vectors E and V.
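As a minimal numerical sketch of the Shockley-Ramo relation above (in Python), the function below evaluates i = qE·V for a point charge drifting in an assumed weighting field; the field and velocity values are illustrative assumptions, not values from the description.

```python
import numpy as np

E_CHARGE = 1.602e-19  # elementary charge [C]

def induced_current(q, e_weighting, velocity):
    """Shockley-Ramo: instantaneous current induced on an electrode by a moving charge.

    q           -- moving charge [C] (an electron is -E_CHARGE)
    e_weighting -- weighting field vector of the electrode of interest at the charge position [1/m]
    velocity    -- drift velocity vector of the charge [m/s]
    """
    return q * np.dot(e_weighting, velocity)

# Illustrative example: an electron drifting toward the anode plane, as seen by a
# neighboring (non-collecting) anode whose weighting field points partly upward there.
e_w = np.array([0.0, 0.0, 60.0])       # assumed weighting field [1/m]
v = np.array([0.0, 0.0, -1.0e5])       # assumed drift velocity [m/s]
print(induced_current(-E_CHARGE, e_w, v))  # sign follows cos(alpha) between E and V
```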
Figures 2 and 3 show the occurrence of events in various locations of the detector 11 and the resulting induced charges. Figure 2 shows four events within the detector 11 of Figure 1, and Figure 3 shows the corresponding induced charges.
Figure 2 shows four events: the event 21 starting at a depth Z0, the event 22 starting at a depth Z1, the event 23 starting at a depth Z2, and the event 24 starting at a depth Z3. Each of the events moves along a trajectory starting at Xi and ending at the anode of the primary or collection pixel 25 (the anode which collects the events). The uncollected charge induced on the adjacent pixel (or non-collecting pixel, in this case the pixel 12, which is immediately adjacent to the collecting pixel 25 in the example illustrated) is the integral over time (or over the distance) of the current given by the relation discussed above (i = qE·V = q|E||V|·cos(α)). E is the weighting potential field of the neighboring non-collecting pixel (pixel 12 in this example). Two ranges are represented along the depth D of the detector: a first range I, and a second range II.
In range I, the field vector has a component directed downwards. Consequently, the induced charge is positive over range I. Over range II, the field vector has a component which is directed upwards. Consequently, the induced charge is negative in range II. The event 21 at depth Z0 begins at the cathode 14 and consequently the associated charge moves over the entire length of range I and range II. The other events start farther from the cathode and consequently the associated charges do not move over the entire depth of range I. In addition, the event 24 at depth Z3 begins within range II, closer to the pixelated anodes than the boundary of range II. Consequently, the charge associated with the event 24 at the depth Z3 does not move over the entire depth of range II.
Figure 3 represents the resulting signals corresponding to the events of Figure 2. Namely, the signal 32 represents the collection or primary signal resulting at the collection pixel 25. The signal 34 represents the non-collection signal of the pixelated anode 12 resulting from the event 21 starting at Z0, the signal 35 represents the non-collection signal from the pixelated anode 12 resulting from the event 22 starting at Z1, the signal 36 represents the non-collection signal from the pixelated anode 12 resulting from the event 23 starting at Z2, and the signal 37 represents the non-collection signal from the pixelated anode 12 resulting from the event 24 starting at Z3.
As seen in Figure 3, for the event 21 starting at Z0 (the event occurring at the cathode 14 and moving over the entirety of the two ranges I and II), the total charge induced over ranges I and II is zero, since the positive induced charge in range I and the negative induced charge in range II are equal and compensate each other over the entire depth (for example, at the anodes where Z = D, where D is the thickness of the detector 11).
For event 22 starting at Z1 (which is far from the cathode), the total induced charge is negative, since the positive induced charge in range I is less than that of event 21, given that the charge of event 22 does not cover the entire depth of range I. Similarly, the total induced charge of event 23 is more negative than the total induced charge of event 22, and the total induced charge of event 24 is more negative than the total induced charge of event 23. This can be represented by [Q0 = 0] > [Q1 < 0] > [Q2 < 0] > [Q3 < 0], where Q0 is the total induced charge for event 21 starting at Z0, where Q1 is the total induced charge for event 22 starting at Z1, where Q2 is the total induced charge for event 23 starting at Z2, and where Q3 is the total induced charge for event 24 starting at Z3. Consequently, as seen in Figure 3, the closer an event is to the pixelated anodes (or the farther it is from the cathode), the more negative the signal from a non-collecting anode will tend to be.
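The ordering Q0 = 0 > Q1 > Q2 > Q3 follows from the Shockley-Ramo result that the total charge induced on an electrode equals the moving charge times the difference in that electrode's weighting potential between the end and the start of the drift. The short sketch below reproduces the ordering with an assumed one-dimensional weighting-potential profile for the non-collecting neighbor; the profile shape, peak value and detector thickness are illustrative assumptions.

```python
import numpy as np

E_CHARGE = 1.602e-19  # elementary charge [C]
D = 5e-3              # assumed detector thickness [m]

def w_neighbor(z):
    """Assumed weighting potential of a NON-collecting neighbor anode, sampled along
    an electron track that ends on the collecting anode: zero at the cathode (z = 0),
    zero again at the anode plane (z = D), with a positive bump in between
    (ranges I and II of Figure 2)."""
    return 0.08 * np.sin(np.pi * (z / D) ** 2)

def total_induced_charge_on_neighbor(z_start):
    """Total induced charge for an electron (q = -e) drifting from z_start to the
    collecting anode: Q = q * (w(start) - w(end))."""
    q = -E_CHARGE
    return q * (w_neighbor(z_start) - w_neighbor(D))

for z0 in (0.0, 0.2 * D, 0.4 * D, 0.6 * D):   # roughly Z0..Z3 of Figure 4
    print(f"z0 = {z0:.1e} m  ->  Q_neighbor = {total_induced_charge_on_neighbor(z0):+.2e} C")
# Prints approximately zero for the cathode-start event and increasingly negative values
# for deeper starts, matching Q0 = 0 > Q1 > Q2 > Q3.
```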
Figure 4 represents five groups of events under a primary pixel 42 (the collection pixel which generates a primary signal) located at five different DOIs: Z0, Z1, Z2, Z3, Z4. Each group includes three events located at different X coordinates (namely X1, X2 and X3) for a particular depth. The trajectories for each event moving towards the primary anode 42 (for example, the anode 25 of Figure 2) are represented diagrammatically in Figure 4. These events move in the weighting potential and the electric field of the neighboring non-collecting pixel 44 (for example, the anode 12 of Figures 1 and 2) on which the uncollected charge is induced, which leads to a secondary signal generated by the adjacent non-collecting pixel 44.
These resulting uncollected or secondary induced signals produced by each event are shown in Figures 5 to 9. Figure 5 includes a graph 50 which represents the resulting uncollected or secondary signals for events located at Z0. As seen in Figure 5, despite the fact that the events of the group at Z0 have different lateral locations X1, X2, and X3, they all produce the same total uncollected induced charge signal, which is equal to zero. It can be noted that all the events at Z0 start at the cathode. The curve 51 in Figure 5 is the primary signal collected at the primary anode 42, and is shown in Figure 5 to help illustrate the differences between the primary signal and the secondary induced signal in terms of amplitude and shape.
Figure 6 includes a graph 60 which represents the resulting uncollected or secondary signals for the events located at Z1. As seen in Figure 6, despite the fact that the events of the group at Z1 have different lateral locations X1, X2, and X3, they all produce the same (or almost the same) total uncollected induced charge signal, which is negative. It can be noted that all the events at Z1 start at a short distance from the cathode and consequently have a negative induced charge of a relatively small magnitude.
Figure 7 includes a graph 70 which represents the resulting uncollected or secondary signals for the events located at Z2. As seen in Figure 7, despite the fact that the events of the group at Z2 have different lateral locations X1, X2, and X3, they all produce approximately the same total uncollected induced charge signal, which is negative (for example, within the range 72). It can be noted that all the events at Z2 start at a greater distance from the cathode than the events at Z1 and consequently have a relatively more negative induced charge.
Figure 8 includes a graph 80 which represents the resulting uncollected or secondary signals for the events located at Z3. As seen in Figure 8, despite the fact that the events of the group at Z3 have different lateral locations X1, X2, and X3, they all produce approximately the same total uncollected induced charge signal, which is negative (for example, within the range 82). It can be noted that all the events at Z3 start at a greater distance from the cathode than the events at Z2 (and Z1) and consequently have a relatively more negative induced charge. It can be noted that the difference between the upper and lower values for range 82 and range 72 (see Figure 7) is small enough to be ignored in various embodiments, so that the DOI is treated as independent of the lateral location.
Figure 9 includes a graph 90 which represents the resulting uncollected or secondary signals for the events located at Z4. The negative charges for events occurring at a depth Z4 are significantly different from each other, due to the proximity of Z4 to the anodes. Generally, in the example illustrated, as the depth of the event approaches the anode, the variability of the negative induced charge as a function of the lateral position increases, the variability becoming significant only at depths very close to the anode.
As discussed above, with the exception of events that start very close to a collection anode, the events produce a total uncollected induced charge in one or more anodes adjacent to the collection anode which is correlated with the DOI of the event, substantially independently of the lateral position. Therefore, the total induced charge caused by an event on the adjacent pixelated anode (or anodes), which is zero or negative, can be used to infer or determine the DOI of the particular event. It can further be noted that, due to the high absorption of the detector, very few events start near the anodes, and consequently such events can have a negligible effect on the use of a negative induced charge for determining the DOI. Various embodiments and methods described here consequently determine a magnitude for a negative uncollected induced signal (also referred to here as a secondary signal), and use the determined magnitude of the negative signal to determine or deduce the DOI.
As discussed above, in various embodiments, the DOI of an event can be deduced from a correlation between the DOI of the event and the total uncollected induced charge on the adjacent pixel (or pixels), which is zero or negative, the correlation between the DOI and the total uncollected induced charge being substantially independent of the lateral position, so that the lateral position need not be taken into account to deduce the DOI. However, it should be noted that, for example, different photons can have different energies which can produce a different value for the total uncollected induced charge. Consequently, in various embodiments, a detection system can be calibrated to take account of different photon energies, for example to normalize the uncollected induced charge value with respect to photon energy. Such a calibration process can be performed to provide calibration information which is used to determine the DOI. The calibration information can be in the form of a look-up table, for example, or in the form of a formula or mathematical expression based on a curve fit, as another example.
Figure 10 shows a calibration system 92 according to one embodiment. The calibration system 92 shown is used to calibrate a detector 93 having a side wall 94 which extends between an anode surface 95 and a cathode surface 96. The calibration system 92 includes a radiation source 97 and a pinhole collimator 98. The pinhole collimator 98 defines a scanning opening which can be moved along the Z direction as seen in Figure 10 to irradiate the side wall 94 of the detector 93 at different DOIs (different Z coordinates). In this way, events with known DOIs and known photon energy are created at different lateral positions, the lateral positions depending on the absorption statistics of the irradiation through the side wall. By measuring the resulting induced negative charges for different DOIs, the negative values of the total induced charge for adjacent uncollected signals can be used to create a look-up table or other relationship to deduce the DOI from an uncollected induced charge.
It can also be noted that, since the negative value of the induced charge also depends on the energy of the absorbed photon, the calibration can also take account of the photon energy. For example, the DOI can be calibrated based on a ratio between a negative value of the induced uncollected signal and the amplitude of the primary or collected signal. Such a ratio in various embodiments can be expressed as follows:
DOI ∝ [negative value of the induced charge] / [amplitude of the primary signal]
Since the negative value of the induced signal is independent (or essentially or substantially independent, as discussed here) of the lateral position (or of the X, Y coordinates), all adjacent or neighboring pixels will produce similar negative signals. Consequently, the signal-to-noise ratio can be improved by adding the negative signals from a number of adjacent or neighboring pixels. Such a relationship in various embodiments can be expressed by:
DOI ∝ Σ (i = 1 to N) [negative value of the induced charge]_i / [amplitude of the primary signal]
It can be noted that the negative induced charge or signal of an adjacent pixel and the primary signal in various embodiments are measured after shapers which are configured to shape the received or acquired signals. In various embodiments, the two signals have a generally step-like shape and generally similar peaking and shaping times. Consequently, the ratio between the signals can be approximately the same either after the shapers, or immediately after the amplifiers from which the shapers receive the signals.
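As a minimal sketch of the ratio above, the Python fragment below sums the negative induced values from the neighboring pixels, normalizes by the primary amplitude, and maps the ratio to a depth through a calibration table; the table values, detector thickness and signal amplitudes are hypothetical and would in practice come from a calibration such as that of Figure 10.

```python
import numpy as np

# Hypothetical calibration (ratio -> DOI in mm, measured from the cathode) for an
# assumed 5 mm thick detector; np.interp requires the x values to be increasing.
CAL_RATIO = np.array([-0.20, -0.15, -0.10, -0.05, 0.00])
CAL_DOI_MM = np.array([4.5, 3.5, 2.5, 1.5, 0.0])

def doi_metric(primary_amplitude, neighbor_negative_values):
    """Energy-normalized ratio: sum of the negative (uncollected) induced values of the
    neighboring pixels divided by the amplitude of the primary (collected) signal."""
    return np.sum(neighbor_negative_values) / primary_amplitude

def doi_from_signals(primary_amplitude, neighbor_negative_values):
    """Map the ratio to a depth of interaction via the calibration table."""
    return np.interp(doi_metric(primary_amplitude, neighbor_negative_values),
                     CAL_RATIO, CAL_DOI_MM)

# Example with assumed amplitudes in arbitrary ADC units:
print(doi_from_signals(1000.0, [-30.0, -25.0, -28.0, -27.0]))  # ~2.7 mm
```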
Figure 11 provides a schematic view of a radiation detector assembly 100 in accordance with various embodiments. As seen in Figure 11, the radiation detector assembly 100 includes a semiconductor detector 110 and a processing unit 120. The semiconductor detector 110 has a surface 112 on which several pixelated anodes 114 are arranged. In the embodiment shown, a cathode 142 is disposed on a surface opposite to the surface 112 on which the pixelated anodes 114 are arranged. For example, a single cathode can be deposited on one surface with the pixelated anodes disposed on an opposite face. Generally, when radiation (for example, one or more photons) strikes the pixelated anodes 114, the semiconductor detector 110 generates electrical signals corresponding to the radiation penetrating through the surface of the cathode 142 and being absorbed in the volume of the detector 110 under the surface 112. In the illustrated embodiment, the pixelated anodes 114 are represented in a 5 × 5 array for a total of 25 pixelated anodes 114; however, it should be noted that other numbers or arrangements of pixelated anodes can be used in various embodiments. Each pixelated anode 114, for example, can have a surface area of 2.5 square millimeters; however, other sizes and/or shapes can be employed in various embodiments.
The semiconductor detector 110 in various embodiments can be constructed using different materials, such as semiconductor materials, including cadmium zinc telluride (CdZnTe), often called CZT, cadmium telluride (CdTe), and silicon (Si), among others. The detector 110 can be configured for use, for example, with nuclear medicine (NM) imaging systems, positron emission tomography (PET) imaging systems, and/or single-photon emission computed tomography (SPECT) imaging systems.
In the illustrated embodiment, each pixelated anode 114 generates different signals depending on the lateral location (for example, in the X, Y directions) at which a photon is absorbed in the volume of the detector 110 below the surface 112. For example, each pixelated anode 114 generates a primary or collected signal in response to the absorption of a photon in the volume of the detector 110 under the particular pixelated anode 114 through which the photon enters the volume of the detector. The volumes of the detector 110 under the pixelated anodes 114 are defined as voxels (not shown). For each pixelated anode 114, the detector 110 has a corresponding voxel. The absorption of a photon by a certain voxel corresponding to a particular pixelated anode 114a also leads to an induced charge which can be detected by pixels 114b adjacent to or surrounding the particular pixelated anode 114a which receives the photon. The charge detected by an adjacent or surrounding pixel can be designated here as an uncollected charge, and leads to an uncollected or secondary signal. A primary signal can include information about photon energy (for example, distribution over a range of energy levels) as well as location information corresponding to the particular pixelated anode 114 at which a photon penetrates through the surface of the cathode 142 and is absorbed in the corresponding voxel.
For example, in Figure 11, a photon 116 is shown striking the pixelated anode 114a to be absorbed in the corresponding voxel. Consequently, the pixelated anode 114a generates a primary signal in response to the reception of the photon 116. As also seen in Figure 11, the pixelated anodes 114b are adjacent to the pixelated anode 114a. The pixelated anode 114a has 8 adjacent pixelated anodes 114b. When the pixelated anode 114a is struck by the photon 116, a charge is induced in and collected by the pixelated anode 114a to produce the primary signal. One or more of the adjacent pixelated anodes 114b generate a secondary signal in response to the induced charge generated in and collected by the pixelated anode 114a, which produces the primary signal. The secondary signal has a smaller amplitude than the primary signal. For any given photon, the corresponding primary signal (from the impacted pixel) and secondary signals (from one or more pixels adjacent to the impacted pixel) can be used to locate the reception point of a photon at a particular location within the pixel (for example, to identify a particular sub-pixel location within the pixel).
As seen in Figure 11, the side walls 140 extend along a depth 150 in the Z direction between the surface 112 and the cathode 142. The location along the Z direction, along the depth 150, at which the photon 116 is absorbed is the DOI for the corresponding event. As discussed here, the negative uncollected induced charge on one or more adjacent pixelated anodes 114b is used in the illustrated embodiment to determine the DOI for the event corresponding to the impact of the photon 116.
Each pixelated anode 114 may have associated therewith one or more electronic channels configured to supply the primary and secondary signals to one or more aspects of the processing unit 120 in cooperation with the pixelated anodes. In certain embodiments, all or part of each electronic channel can be arranged on the detector 110. Alternatively or in addition, all or part of each electronic channel can be housed externally to the detector 110, for example, as part of the processing unit 120, which can be or include an application-specific integrated circuit (ASIC). The electronic channels can be configured to provide the primary and secondary signals to one or more aspects of the processing unit 120 while rejecting other signals. For example, in some embodiments, each electronic channel includes a threshold discriminator. The threshold discriminator can allow signals exceeding a threshold level to be transmitted while preventing or inhibiting the transmission of signals that do not exceed the threshold level. Generally, the threshold level is set low enough to reliably capture the secondary signals, while still being set high enough to exclude lower intensity signals, for example due to noise. It will be noted that, since the secondary signals can be of relatively low intensity, the electronics used are preferably low-noise electronics to reduce or eliminate the noise which is not eliminated by the threshold level. In some embodiments, each electronic channel includes a peak-and-hold unit for storing electrical signal energy, and may also include a readout mechanism. For example, the electronic channel may include a request-acknowledge mechanism for individually reading the peak-and-hold energy and the pixel location for each channel. Additionally, in some embodiments, the processing unit 120 or another processor may control the signal threshold level and the request-acknowledge mechanism.
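A minimal sketch of the per-channel threshold discriminator and peak-and-hold behavior described above, in Python; the threshold value and the shaped waveforms are illustrative assumptions rather than actual channel parameters.

```python
def discriminate_and_hold(shaped_samples, threshold):
    """Report the held peak of a shaped channel waveform only if it crosses the threshold.

    shaped_samples -- samples of the shaped signal for one channel (arbitrary units)
    threshold      -- discriminator level, set low enough to keep weak secondary signals
                      but high enough to reject noise
    Returns the peak value (peak-and-hold) or None if the channel did not trigger.
    """
    peak = max(shaped_samples, key=abs)   # peak magnitude, preserving sign
    return peak if abs(peak) > threshold else None

# Example with assumed waveforms: a weak secondary signal passes, pure noise is rejected.
print(discriminate_and_hold([0, -2, -6, -9, -8, -5], threshold=4))  # -> -9 (kept)
print(discriminate_and_hold([0, 1, -1, 2, -2, 1], threshold=4))     # -> None (rejected)
```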
In the illustrated embodiment, the processing unit 120 is operatively coupled to the pixelated anodes 114, and is configured to acquire primary signals (for the collected charges) and secondary signals (for the uncollected charges). For example, the processing unit 120 in various embodiments acquires a primary signal from one of the anodes in response to the reception of a photon by that anode. For example, a primary signal can be acquired from the pixelated anode 114a in response to reception of the photon 116. The processing unit 120 also acquires at least one secondary signal from at least one neighboring pixel (for example, at least one adjacent anode 114b) in response to an induced charge caused by the reception of the photon. For example, a secondary signal may be acquired from one or more of the neighboring pixels 114b in response to reception of the photon 116. It may be noted that the secondary signal (or signals) and the primary signal generated in response to the reception of the photon 116 can be associated with each other based on the time and location of detection of the corresponding charges.
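One way to perform that association is to group readouts by a coincidence time window and pixel adjacency, as in the Python sketch below; the tuple layout, field names and window value are assumptions made for illustration, not the patent's data format.

```python
def group_event_signals(readouts, neighbors_of, time_window=50.0):
    """Associate each primary readout with secondary readouts from neighboring pixels
    whose timestamps fall within a coincidence window.

    readouts     -- list of (timestamp, pixel_id, amplitude, kind) tuples,
                    with kind either "primary" or "secondary"
    neighbors_of -- dict mapping a pixel id to the set of its adjacent pixel ids
    time_window  -- coincidence window in the same time units as the timestamps
    """
    secondaries = [r for r in readouts if r[3] == "secondary"]
    events = []
    for t0, pixel, amplitude, kind in readouts:
        if kind != "primary":
            continue
        linked = [s for s in secondaries
                  if s[1] in neighbors_of.get(pixel, set()) and abs(s[0] - t0) <= time_window]
        events.append({"time": t0, "pixel": pixel, "primary": amplitude, "secondary": linked})
    return events

# Example with assumed readouts: pixel 12 is the collecting pixel, pixel 7 one of its neighbors.
readouts = [(100.0, 12, 950.0, "primary"), (101.0, 7, -28.0, "secondary"), (400.0, 3, 900.0, "primary")]
print(group_event_signals(readouts, {12: {7, 11, 13, 17}, 3: {2, 4, 8}}))
```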
The processing unit 120 shown is also configured to determine an interaction depth (DOI) in the semiconductor detector 110 for the reception of the photon using (for example, as a function of) the at least one secondary signal. For example, a DOI along the depth 150 where the photon 116 is absorbed can be determined. In some embodiments, a total negative uncollected induced charge for the at least one secondary signal can be determined and used to determine the DOI as discussed here. In various embodiments, a look-up table or other correlation can be used to determine the DOI from a total negative uncollected induced charge determined for the at least one secondary signal. It can be noted that, in various embodiments, the processing unit 120 determines the DOI using only signals generated as a function of information from the pixelated anodes 114, and without using any information from the cathode 142. Consequently, any hardware or electrical connections in the construction and/or assembly of the detector assembly 100 which would otherwise be necessary to acquire signals from the cathode 142 for use in determining the DOI can be eliminated. In addition, the acquisition and/or processing requirements or complexity can be further reduced by using the same information (primary and secondary signals), as discussed here, to determine both the DOI and the sub-pixel location.
The determined DOI can be used to improve the quality of the image. For example, the determined DOI can be used to correct or adjust the acquired imaging information. In some embodiments, the processing unit 120 is configured to adjust an energy level for an event corresponding to the reception of a photon by an anode as a function of the DOI. It can be noted that the charge loss for a detected event depends on the distance of the absorption location for the event from the anode. Consequently, the DOI for a number of events can be used to correct for the charge loss, to make the energy levels for the events more uniform and/or closer to a photoelectric peak for precise identification and precise counting of events.
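A minimal sketch of such a DOI-dependent energy correction in Python; the efficiency-versus-depth curve below is a placeholder assumption, whereas in practice it would come from calibration measurements.

```python
import numpy as np

# Assumed relative charge-collection efficiency versus DOI (distance from the cathode,
# in mm) for a hypothetical 5 mm thick detector; a real curve would come from calibration.
DOI_MM = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
REL_EFFICIENCY = np.array([1.000, 0.993, 0.986, 0.978, 0.968, 0.955])

def corrected_energy(measured_energy_kev, doi_mm):
    """Divide the measured energy by the depth-dependent efficiency so that events with
    different DOIs line up on the same photopeak."""
    return measured_energy_kev / np.interp(doi_mm, DOI_MM, REL_EFFICIENCY)

# Example: an assumed 140 keV photon measured with some charge loss deep in the detector.
print(corrected_energy(136.0, 4.2))  # -> approximately 140 keV
```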
Alternatively or in addition, the processing unit 120 can be configured to reconstruct an image using the DOI. For example, the DOI of a number of events can be used directly by a reconstruction technique to use 3D positioning of events in the detector for reconstruction. As another example, the DOI can be used indirectly by a reconstruction technique by using the DOI to correct the energy levels, and then using the corrected energy levels for the image reconstruction.
As discussed here, calibration information is used in various embodiments. The processing unit 120 in various embodiments is configured to use calibration information (see, for example, Figure 10 and the discussion thereof) to determine the DOI. The calibration information can be in the form of a look-up table or another relationship stored or otherwise associated with or accessible by the processing unit 120 (for example, stored in the memory 130). In some embodiments, the processing unit 120 is configured to determine the DOI using calibration based on a ratio between a negative value of a single secondary signal and an amplitude of the primary signal. (See Figure 10 and associated discussion.) As another example, in some embodiments, the processing unit 120 is configured to determine the DOI using calibration based on a ratio between a sum or a combination of negative values for several secondary signals (for example, signals from a number of adjacent pixels 114b) and an amplitude of the primary signal. (See Figure 10 and associated discussion.) In various embodiments, the processing unit 120 can also be configured to determine a sub-pixel location (for example, lateral placement) for events using the primary signal and at least one secondary signal, in addition to determining the DOI. The sub-pixel location and the DOI can be determined using the same primary signal and at least one secondary signal, providing an efficient determination of both. For example, the processing unit 120 of the example shown is configured to define sub-pixels for each pixelated anode. It can be noted that the sub-pixels in the illustrated embodiment (represented as separated by dotted lines) are not physically separated, but are rather virtual entities defined by the processing unit 120. Generally, the use of increasing numbers of sub-pixels per pixel improves resolution while increasing computational or processing requirements. The particular number of sub-pixels defined or used in a given application can be chosen according to a balance between improved resolution and increased processing requirements. In various embodiments, the use of virtual sub-pixels as discussed herein provides improved resolution while avoiding or reducing the costs associated with an ever increasing number of ever smaller pixelated anodes.
In the illustrated embodiment, the pixelated anode 114a is shown as being divided into four sub-pixels, namely the sub-pixel 150, the sub-pixel 152, the sub-pixel 154 and the sub-pixel 156. While the sub-pixels are shown in Figure 11 for only the pixelated anode 114a for clarity and ease of illustration, it may be noted that the processing unit 120 in the illustrated embodiment also defines corresponding sub-pixels for each of the remaining pixelated anodes 114. As seen in Figure 11, the photon 116 strikes a part of the pixelated anode 114a defined by the virtual sub-pixel 150.
In the illustrated embodiment, the processing unit 120 acquires the primary signal for a given acquisition event (for example, the impact of a photon) from the pixelated anode 114a, with timing information (for example, a timestamp) corresponding to the time of generation of the primary signal and location information identifying the pixelated anode 114a as the pixelated anode corresponding to the primary signal. For example, an acquisition event such as a photon striking a pixelated anode 114 can lead to a certain number of counts occurring over a range or spectrum of energies, the primary signal including information describing the distribution of counts over the range or spectrum of energies. The processing unit 120 also acquires one or more secondary signals for the acquisition event from the pixelated anodes 114b, with timestamp information and location information for the secondary signal (or signals). The processing unit 120 then determines the location for the given acquisition event by identifying the pixelated anode 114a as the impacted pixelated anode 114a, then determining which of the sub-pixels 150, 152, 154, 156 defines the impact location for the acquisition event. Using conventional methods, the sub-pixel location among the sub-pixels 150, 152, 154, 156 can be inferred based on the location (for example, the associated pixelated anode) and the relationship between the intensities of the primary signal in the associated pixelated anode 114a and the secondary signal (or signals) in the adjacent pixelated anodes 114b for the acquisition event. The processing unit 120 may use the timestamp information as well as the location information to associate the primary signal and the secondary signals generated in response to the given acquisition event with each other, and to distinguish the primary signal and secondary signals for the given acquisition event from signals for other acquisition events occurring during a collection or acquisition period. Consequently, the use of timestamp information makes it possible to distinguish the primary signal and its corresponding secondary signals from a random coincidence which may occur between primary signals of adjacent pixels, since the timestamps of the primary signal and its corresponding secondary signals are correlated for a particular acquisition event.
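As an illustration of the kind of relationship mentioned above, the sketch below estimates a lateral offset inside the collecting pixel from the asymmetry of the induced-signal magnitudes in the four edge neighbors and maps it onto a 2 × 2 virtual sub-pixel grid; this simple asymmetry estimator and its input values are assumptions for illustration, not the specific method of the referenced applications.

```python
def subpixel_offsets(left, right, top, bottom):
    """Rough lateral offsets in [-1, 1] inside the collecting pixel (0 is the pixel
    center), from the magnitudes of the induced secondary signals in the four edge
    neighbors: a larger signal on one side pulls the estimate toward that side."""
    dx = (right - left) / (right + left) if (right + left) else 0.0
    dy = (top - bottom) / (top + bottom) if (top + bottom) else 0.0
    return dx, dy

def subpixel_index(dx, dy):
    """Map the offsets onto a 2 x 2 grid of virtual sub-pixels (e.g. 150/152/154/156)."""
    return (0 if dx < 0 else 1, 0 if dy < 0 else 1)

# Example with assumed secondary-signal magnitudes (arbitrary units):
dx, dy = subpixel_offsets(left=12.0, right=30.0, top=25.0, bottom=10.0)
print(subpixel_index(dx, dy))  # -> (1, 1): upper-right virtual sub-pixel
```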
Additional discussion regarding virtual sub-pixels and the use of virtual sub-pixels, and the use of collected and uncollected charge signals, can be found in US Patent Application Serial Number 14/724,022, entitled "Systems and Method for Charge-Sharing Identification and Correction Using a Single Pixel", filed May 28, 2015 (the "022 application"); US Patent Application Serial Number 15/280,640, entitled "Systems and Methods for Sub-Pixel Location Determination", filed September 29, 2016 (the "640 application"); and US Patent Application Serial Number 14/627,436, entitled "Systems and Methods for Improving Energy Resolution by Sub-Pixel Energy Calibration", filed February 20, 2015 (the "436 application"). The subject matter of each of the 022 application, the 640 application and the 436 application is incorporated by reference in its entirety.
In various embodiments, the processing unit 120 includes processing circuits configured to perform one or more tasks, functions or steps discussed here. It may be noted that the expression "processing unit" as used here is not necessarily intended to be limited to a single processor or computer. For example, the processing unit 120 can include several processors, ASICs, FPGAs and/or computers, which can be integrated into a common housing or unit, or which can be distributed among various units or housings. It should be noted that the operations performed by the processing unit 120 (for example, the operations corresponding to process flows or methods discussed here, or aspects thereof) can be complex enough that the operations cannot be performed by a human in a reasonable period of time. For example, the determination of collected and uncollected charge values, and/or the determination of DOIs and/or sub-pixel locations as a function of the collected and/or uncollected charges within the time constraints associated with such signals, can be based on or use calculations that cannot be performed by a person in a reasonable period of time.
The processing unit 120 shown includes a memory 130. The memory 130 can include one or more computer-readable storage media. The memory 130 can, for example, store mapping information describing the sub-pixel locations, acquired emission information, image data corresponding to generated images, results of intermediate processing steps, calibration parameters or calibration information (for example, a look-up table correlating a negative induced charge value with DOI), or the like. In addition, the process flows and/or the flowcharts discussed here (or aspects thereof) can represent one or more sets of instructions which are stored in the memory 130 for directing operations of the radiation detector assembly 100.
Figure 12 provides a flow diagram of a method 200 (for example, for determining DOI), in accordance with various embodiments. The method 200, for example, can employ or be performed by structures or aspects of various embodiments (for example, systems and/or methods and/or process flows) discussed here. In various embodiments, some steps can be omitted or added, some steps can be combined, some steps can be performed simultaneously, some steps can be divided into multiple steps, some steps can be performed in a different order, or some steps or series of steps can be performed again iteratively. In various embodiments, parts, aspects and/or variants of the method 200 may be used as one or more algorithms to direct hardware (for example, one or more aspects of the processing unit 120) to carry out one or more operations described here.
In 202, primary signals and secondary signals are acquired corresponding to collection events (for example, events corresponding to the reception of photons). The primary and secondary signals are generated in response to the reception of photons by a semiconductor detector, and are received from pixelated anodes (for example, anodes of a semiconductor detector in an imaging assembly such as the assembly 100). For example, a patient who has been administered at least one radiopharmaceutical may be placed within a field of view of one or more detectors, and radiation (for example, photons) emitted by the patient may strike the pixelated anodes disposed on reception surfaces of the detector or detectors, leading to acquisition events (for example, photon impacts). For a given photon impact in the exemplary embodiment shown, a primary signal (in response to a collected charge) is generated by the impacted pixelated anode (or collection anode), and one or more secondary signals (in response to an uncollected charge) are generated by the pixelated anodes adjacent to the impacted pixelated anode (or non-collection anodes).
In 204, an interaction depth (DOI) in the semiconductor detector is determined for the acquisition events based on the primary and secondary signals acquired in 202. In various embodiments, the DOI for a given event is determined using at least one secondary signal for that particular event. For example, as discussed here, the DOI in various embodiments is determined based on a total negative uncollected induced charge value from one or more adjacent or non-collecting pixels (for example, at least one adjacent pixelated anode). It can be noted that the DOI in various embodiments is determined without using any information (for example, a detected charge or corresponding signals) from a cathode of the detector assembly.
In various embodiments, the total negative uncollected induced charge can be adjusted or corrected to take account of variations in the construction of the semiconductor and/or photon energies. For example, in the embodiment shown, at 206, calibration information is used to determine the DOI. As discussed here, in some embodiments, the DOI can be determined using calibration based on a ratio between a negative value of a single secondary signal and an amplitude of the primary signal, and in some embodiments, the DOI can be determined using calibration based on a ratio of a sum or combination of negative values for multiple secondary signals to an amplitude of the primary signal. (See Figure 10 and associated discussion.) In 208, a sub-pixel location is determined using the primary signal and the at least one secondary signal. For each event, a corresponding sub-pixel location can be determined. It should be noted that the same information (primary and secondary signals) used to determine the DOIs for events can also be used to determine the sub-pixel locations for these events.
In 210, an energy level for an event is adjusted according to the DOI. For example, since the energy detected can vary depending on the DOI, the DOI for each acquired event can be used to adjust the corresponding energy levels according to the corresponding DOI to make the energy levels for a group of events more consistent and / or closer to a target or other predetermined energy level.
In 212, an image is reconstructed using the DOI. For example, corrected energy levels from 210 can be used to reconstruct an image. As another example, the DOIs for events can be used to determine the 3D positioning information of these events within a detector, with the positioning information used to reconstruct an image.
As discussed here, a radiation detector system (for example, a system configured to determine the DOI using secondary signals corresponding to uncollected induced charges) can be calibrated. Figure 13 provides a flow diagram of a method 300 (for example, for providing and calibrating a radiation detector assembly), in accordance with various embodiments. The method 300, for example, can employ or be performed by structures or aspects of various embodiments (for example, systems and/or methods and/or process flows) discussed herein. In various embodiments, some steps can be omitted or added, some steps can be combined, some steps can be performed simultaneously, some steps can be divided into multiple steps, some steps can be performed in a different order, or some steps or series of steps can be performed again iteratively. In various embodiments, parts, aspects and/or variants of the method 300 may be suitable for use as one or more algorithms to direct hardware (for example, one or more aspects of the processing unit 120) to carry out one or more operations described here.
In 302, a semiconductor detector (for example, the semiconductor detector 110 of the radiation imaging assembly 100) is proposed. The semiconductor detector of the example shown has a surface with several pixelated anodes disposed on the surface. Each pixelated anode is configured to generate a primary signal in response to reception of a photon by the pixelated anode and to generate at least one secondary signal in response to an induced charge caused by the reception of a photon by at least one adjacent anode. In 304, the pixelated anodes are functionally coupled to at least one processor (for example, the processing unit 120).
In 306, a calibrated radiation source (for example, having a known photon energy) is provided at different depths along a side wall of the semiconductor detector. In response to receiving the calibrated radiation, the pixelated anodes generate primary and secondary signals. For example, the calibrated radiation can be passed through a pinhole collimator to the side wall of the semiconductor detector. The location of a given pinhole (for example, in the Z direction) through which the radiation is passed can be used to determine the DOI at which the corresponding radiation from the collimator is received by the semiconductor detector. In 308, the primary and secondary signals are acquired from the pixelated anodes by the at least one processor.
In 310, corresponding negative values of total induced charge for each of the different depths at which the radiation was supplied are determined. In 312, calibration information (for example, a look-up table or other correlation relationship between DOIs and negative uncollected induced charge values) is determined.
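A minimal sketch of steps 310 and 312 in Python, building a look-up table from a pinhole scan: for each known irradiation depth, the per-event ratios of summed negative induced values to primary amplitude are averaged, and the resulting (ratio, depth) pairs can then be interpolated at run time. The data structure, depths and ratio values are hypothetical.

```python
import numpy as np

def build_doi_calibration(scan):
    """Build a DOI look-up table from a pinhole scan.

    scan -- dict mapping each known irradiation depth [mm] to a list of per-event
            ratios (sum of negative induced values) / (primary amplitude) at that depth
    Returns (cal_ratio, cal_depth), sorted by increasing ratio for use with np.interp.
    """
    depths = np.array(sorted(scan))
    mean_ratio = np.array([np.mean(scan[d]) for d in depths])
    order = np.argsort(mean_ratio)
    return mean_ratio[order], depths[order]

def doi_from_ratio(ratio, cal_ratio, cal_depth):
    """Interpolate the calibration to deduce the DOI for a measured ratio."""
    return np.interp(ratio, cal_ratio, cal_depth)

# Hypothetical scan of an assumed 5 mm thick detector at four depths (mm -> ratios):
scan = {0.0: [0.001, -0.002], 1.5: [-0.050, -0.047], 3.0: [-0.110, -0.105], 4.5: [-0.190, -0.200]}
cal_ratio, cal_depth = build_doi_calibration(scan)
print(doi_from_ratio(-0.08, cal_ratio, cal_depth))  # -> roughly 2.3 mm
```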
Figure 14 is a schematic representation of an NM imaging system 1000 having a plurality of imaging detector head assemblies mounted on a gantry (which can be mounted, for example, in rows, in an iris shape, or in other configurations, such as a configuration in which the movable detector holders 1016 are aligned radially toward the body of the patient 1010). In particular, a plurality of imaging detectors 1002 are mounted on a gantry 1004. In the illustrated embodiment, the imaging detectors 1002 are configured as two separate detector arrays 1006 and 1008 coupled to the gantry 1004 above and below a subject 1010 (for example, a patient), as shown in Figure 14. The detector arrays 1006 and 1008 can be directly coupled to the gantry 1004, or can be coupled via support elements 1012 to the gantry 1004 to allow the displacement of the arrays 1006 and/or 1008 as a whole with respect to the gantry 1004 (for example, transverse translational movement to the left or the right as shown by the arrow T in Figure 14). In addition, each of the imaging detectors 1002 includes a detection unit 1014, at least some of which are mounted on a movable detector holder 1016 (for example, a support arm or actuator that can be driven by a motor to cause movement thereof) which extends from the gantry 1004. In certain embodiments, the detector holders 1016 allow the movement of the detection units 1014 towards and away from the subject 1010, for example in a linear fashion. Thus, in the illustrated embodiment, the detector arrays 1006 and 1008 are mounted in parallel above and below the subject 1010 and allow the linear movement of the detection units 1014 in one direction (indicated by the arrow L), illustrated as being perpendicular to the support elements 1012 (which are generally coupled horizontally on the gantry 1004). However, other configurations and orientations are possible as described here. It should be noted that the movable detector holder 1016 can be any type of support allowing the movement of the detection units 1014 relative to the support element 1012 and/or to the gantry 1004, which allows, in various embodiments, the detection units 1014 to move linearly towards and away from the support element 1012.
Each of the imaging detectors 1002 in various embodiments is smaller than a conventional whole-body or general-purpose imaging detector. A conventional imaging detector may be large enough to obtain an image of most or all of the width of a patient's body at one time and may have a diameter or a larger dimension of approximately 50 cm or more. In contrast, each of the imaging detectors 1002 can include one or more detection units 1014 coupled to a respective detector holder 1016 and having dimensions of, for example, 4 cm to 20 cm, and can be formed of cadmium zinc telluride (CZT) tiles or modules. For example, each of the detection units 1014 can measure 8 × 8 cm and be made up of a plurality of pixelated CZT modules (not shown). For example, each module can measure 4 × 4 cm and have 16 × 16 = 256 pixels. In some embodiments, each detection unit 1014 includes a plurality of modules, such as an array of 1 × 7 modules. However, various array configurations and sizes are contemplated, including, for example, detection units 1014 having multiple rows of modules.
It should be understood that the imaging detectors 1002 can be of different sizes and / or shapes from one another, such as square, rectangular, circular or another shape. An actual field of view (FOV) of each of the imaging detectors 1002 can be directly proportional to the size and shape of the respective imaging detector.
The gantry 1004 can be formed with an opening 1018 (for example, an opening or a bore) therethrough as illustrated. A patient table 1020, such as a patient bed, is configured with a support mechanism (not shown) for supporting and transporting the subject 1010 in one or more of a plurality of viewing positions within the opening 1018 and relative to the imaging detectors 1002. Alternatively, the gantry 1004 can comprise a plurality of gantry segments (not shown), each of which can independently move a support element 1012 or one or more of the imaging detectors 1002.
The gantry 1004 can also be configured in other shapes, such as a "C", an "H" and an "L", for example, and may be able to rotate around the subject 1010. For example, the gantry 1004 can be made in the form of a closed ring or circle, or of an open arc or arch which allows easy access to the subject 1010 during imaging and facilitates loading and unloading of the subject 1010, as well as reducing claustrophobia in some subjects 1010.
Additional imaging detectors (not shown) may be positioned to form rows of detector arrays or an arc or ring around the subject 1010. By positioning multiple imaging detectors 1002 at multiple positions relative to the subject 1010, such as along an imaging axis (for example, the head-to-toe direction of the subject 1010), image data specific to a larger FOV can be acquired more quickly.
Each of the imaging detectors 1002 has a radiation detection face, which is directed towards the subject 1010 or a region of interest inside the subject.
In various embodiments, multi-bore collimators may be constructed to be registered with the pixels of the detection units 1014, which, in one embodiment, are CZT detectors. However, other materials may be used. Registered collimation can improve spatial resolution by forcing photons to pass through one bore so that they are collected primarily by one pixel. In addition, registered collimation can improve the sensitivity and energy response of pixelated detectors, because the detector area near the edges of a pixel or between two adjacent pixels may have reduced sensitivity, reduced energy resolution, or other performance degradations. Having the collimator septa directly above the pixel edges reduces the chance of a photon striking these degraded locations, without reducing the overall probability of a photon passing through the collimator.
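As a simple, purely hypothetical numerical illustration of the registered collimation described above (the values are invented; the only point is that the bore pitch matches the pixel pitch so that each septum lies over a pixel edge):

# Toy check that a registered collimator places one bore over each pixel:
# bore pitch equal to the pixel pitch, septa walls over pixel edges.
pixel_pitch_mm = 2.5              # e.g., a 4 cm module with 16 pixels per side
bore_pitch_mm = pixel_pitch_mm    # registered collimation: one bore per pixel

n_pixels = 16
pixel_edges_mm = [i * pixel_pitch_mm for i in range(n_pixels + 1)]
septa_positions_mm = [i * bore_pitch_mm for i in range(n_pixels + 1)]

# With matched pitches, every septum lies directly above a pixel edge,
# steering photons away from the degraded inter-pixel regions.
assert septa_positions_mm == pixel_edges_mm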
A control unit 1030 may control the movement and positioning of the patient table 1020, the imaging detectors 1002 (which may be configured as one or more arms), the gantry 1004, and/or the collimators 1022 (which, in various embodiments, move together with the imaging detectors 1002, being coupled thereto). A range of motion before or during an acquisition, or between different image acquisitions, is set to keep the actual FOV of each of the imaging detectors 1002 directed, for example, towards or "aimed at" a particular area or region of the subject 1010 or along the entire subject 1010. The motion may be a combined or complex motion in multiple directions simultaneously, concurrently, or sequentially, as described in more detail herein.
The control unit 1030 may have a gantry motor control device 1032, a table control device 1034, a detector control device 1036, a pivot control device 1038, and a collimator control device 1040. The control devices 1030, 1032, 1034, 1036, 1038, 1040 may be controlled automatically by a processing unit 1050, controlled manually by an operator, or a combination thereof. The gantry motor control device 1032 may move the imaging detectors 1002 relative to the subject 1010, for example individually, in segments or sub-assemblies, or simultaneously in a fixed relationship to one another. For example, in some embodiments, the gantry control device 1032 may cause the imaging detectors 1002 and/or the support elements 1012 to move relative to, or rotate about, the subject 1010, which may include motion of less than or up to 180 degrees (or more).
The table control device 1034 may move the patient table 1020 to position the subject 1010 relative to the imaging detectors 1002. The patient table 1020 may be moved in up and down directions, in and out directions, and right and left directions, for example. The detector control device 1036 may control the movement of each of the imaging detectors 1002 to move together as a group or individually, as described in more detail herein. In certain embodiments, the detector control device 1036 may also control the movement of the imaging detectors 1002 towards and away from a surface of the subject 1010, such as by controlling the translational movement of the detector carriers 1016 linearly towards or away from the subject 1010 (for example, sliding or telescoping movement). Optionally, the detector control device 1036 may control the movement of the detector carriers 1016 to allow movement of the detector array 1006 or 1008. For example, the detector control device 1036 may control the lateral movement of the detector carriers 1016 shown by the arrow T (to the left and to the right as viewed in Figure 14). In various embodiments, the detector control device 1036 may control the detector carriers 1016 or the support elements 1012 to move in different lateral directions. The detector control device 1036 may control the swiveling motion of the detectors 1002 together with their collimators 1022.
The pivot control device 1038 may control the pivoting or rotating movement of the detection units 1014 at the ends of the detector carriers 1016 and/or the pivoting or rotating movement of the detector carriers 1016. For example, one or more of the detection units 1014 or detector carriers 1016 may be rotated about at least one axis to view the subject 1010 from a plurality of angular orientations to acquire, for example, 3D image data in a 3D SPECT or 3D imaging mode of operation. The collimator control device 1040 may adjust a position of an adjustable collimator, such as a collimator with adjustable strips (or vanes) or adjustable pinhole(s).
It should be noted that the motion of one or more of the imaging detectors 1002 may be in directions other than strictly axial or radial, and motions in several directions may be used in various embodiments. Therefore, the term "motion control device" may be used as a collective name for all motion control devices. It should also be noted that the different control devices may be combined; for example, the detector control device 1036 and the pivot control device 1038 may be combined to provide the different movements described herein.
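The preceding paragraphs describe separate control devices that may be commanded automatically or grouped under a collective "motion control device". Purely as an illustrative sketch (the class, method, and parameter names below are hypothetical and not part of the disclosure), such a composite could be modeled as follows:

# Minimal sketch of a collective "motion control device" that forwards
# position commands to hypothetical sub-controllers (gantry, table,
# detector, pivot, collimator). Names are illustrative only.
class MotionController:
    def __init__(self, gantry_ctrl, table_ctrl, detector_ctrl, pivot_ctrl, collimator_ctrl):
        self.controllers = {
            "gantry": gantry_ctrl,          # cf. control device 1032
            "table": table_ctrl,            # cf. control device 1034
            "detector": detector_ctrl,      # cf. control device 1036
            "pivot": pivot_ctrl,            # cf. control device 1038
            "collimator": collimator_ctrl,  # cf. control device 1040
        }

    def apply(self, commands):
        # commands: mapping of sub-controller name -> target position or angle;
        # each sub-controller is assumed to expose a move_to(target) method.
        for name, target in commands.items():
            self.controllers[name].move_to(target)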
Prior to acquiring an image of the subject 1010 or of a portion of the subject 1010, the imaging detectors 1002, the gantry 1004, the patient table 1020, and/or the collimators 1022 may be adjusted, for example to a first or initial imaging position, as well as to subsequent imaging positions. The imaging detectors 1002 may each be positioned to image a portion of the subject 1010. Alternatively, for example in the case of a small subject 1010, one or more of the imaging detectors 1002 may not be used to acquire data, such as the imaging detectors 1002 at the ends of the detector arrays 1006 and 1008, which, as shown in Figure 14, are in a retracted position away from the subject 1010. Positioning may be performed manually by the operator and/or automatically, which may include using, for example, image information such as other images acquired before the current acquisition, for example by another imaging modality such as computed tomography (CT), MRI, X-ray, PET, or ultrasound. In some embodiments, additional positioning information, such as other images, may be acquired by the same system, as in a hybrid system (for example, a SPECT/CT system). In addition, the detection units 1014 may be configured to acquire non-NM data, such as computed tomography data. In some embodiments, a multimodal imaging system may be provided, for example, to allow NM or SPECT imaging as well as computed tomography, which may include a dual-modality or gantry design as described in more detail herein.
After the imaging detectors 1002, the gantry 1004, the patient table 1020, and/or the collimators 1022 are positioned, one or more images, such as three-dimensional (3D) SPECT images, are acquired using one or more of the imaging detectors 1002, which may include using a combined motion that reduces or minimizes the spacing between the detection units 1014. The image data acquired by each imaging detector 1002 may be combined and reconstructed into a composite image or 3D images in various embodiments.
In one embodiment, at least one of the detector arrays 1006 and/or 1008, the gantry 1004, the patient table 1020, and/or the collimators 1022 are moved after being initially positioned, which includes individual movement of one or more of the detection units 1014 (for example, a combined lateral and pivoting movement) together with the swiveling motion of the detectors 1002. For example, at least one of the detector arrays 1006 and/or 1008 may be moved laterally while being pivoted. Thus, in various embodiments, a plurality of small detectors, such as the detection units 1014, may be used for 3D imaging, for example when moving or sweeping the detection units 1014 in combination with other movements.
In various embodiments, a data acquisition system (DAS) 1060 receives electrical signal data produced by the imaging detectors 1002 and converts this data into digital signals for further processing.
However, in various embodiments, the digital signals are generated by the imaging detectors 1002. An image reconstruction device 1062 (which may be a computer or a processing device) and a data storage device 1064 may be provided in addition to the processing unit 1050. It should be noted that one or more functions related to data acquisition, motion control, data processing, and image reconstruction may be accomplished by shared hardware, software, and/or processing resources, which may be located within or near the imaging system 1000, or may be located remotely. In addition, a user input device 1066 may be provided to receive user inputs (for example, control instructions), as well as a display 1068 for displaying images. The DAS 1060 receives the acquired images from the detectors 1002 together with the corresponding lateral, vertical, rotational, and swiveling coordinates of the gantry 1004, the support elements 1012, the detection units 1014, the detector carriers 1016, and the detectors 1002 for accurate reconstruction of an image, including 3D images and their slices.
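The following brief Python sketch is purely illustrative of the kind of record in which acquired counts might be paired with the positional coordinates that the DAS receives for reconstruction, as described above; the record fields and function names are hypothetical and are not taken from the patent:

from dataclasses import dataclass

# Hypothetical record pairing detector data with the positional coordinates
# (lateral, vertical, rotational, swiveling) that accompany each acquisition.
@dataclass
class AcquisitionFrame:
    detector_id: int
    counts: list            # per-pixel photon counts for this frame
    lateral_mm: float       # lateral position of the detector carrier
    vertical_mm: float      # vertical position
    rotation_deg: float     # gantry/detector rotation angle
    swivel_deg: float       # swiveling angle of the detection unit

def collect_frame(detector_id, counts, pose):
    # pose: dict of the coordinates reported along with the image data
    return AcquisitionFrame(detector_id, counts,
                            pose["lateral_mm"], pose["vertical_mm"],
                            pose["rotation_deg"], pose["swivel_deg"])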
It can be noted that the embodiment of Figure 14 can be understood as a linear arrangement of detection heads (for example, using detection units arranged in a row and extending parallel to one another). In other embodiments, a radial design may be employed. Radial designs, for example, can provide additional benefits in terms of efficient imaging of smaller objects, such as limbs or heads. Figure 15 is a schematic view of a multi-head nuclear medicine (NM) imaging system 1100 according to various embodiments. Generally, the imaging system 1100 is configured to acquire imaging information (for example, photon counts) from an object to be imaged (for example, a human patient) to which a radiopharmaceutical has been administered. The illustrated imaging system 1100 includes a gantry 1110 having a bore 1112 therethrough, several sets of radiation detection heads 1115, and a processing unit 1120.
The gantry 1110 defines the bore 1112. The bore 1112 is configured to accept an object to be imaged (for example, a human patient or a portion thereof). As seen in Figure 15, several sets of radiation detection heads 1115 are mounted on the gantry 1110. In the illustrated embodiment, each set of radiation detection heads 1115 includes an arm 1114 and a head 1116. The arm 1114 is configured to articulate the head 1116 radially toward and/or away from a center of the bore 1112 (and/or in other directions), and the head 1116 includes at least one detector, the head 1116 being disposed at a radially inward end of the arm 1114 and configured to pivot to provide a range of positions from which imaging information is acquired.
The detector of the head 1116 may be, for example, a semiconductor detector. For example, a semiconductor detector of various embodiments may be constructed using different materials, such as semiconductor materials, including cadmium zinc telluride (CdZnTe), often referred to as CZT, cadmium telluride (CdTe), and silicon (Si), among others. The detector may be configured to be used, for example, with nuclear medicine (NM) imaging systems, positron emission tomography (PET) imaging systems, and/or single-photon emission computed tomography (SPECT) imaging systems.
In various embodiments, the detector may include an array of pixelated anodes, and may generate different signals depending on the location where a photon is absorbed in the volume of the detector beneath a surface of the detector. The detector volumes under the pixelated anodes are defined as voxels. For each pixelated anode, the detector has a corresponding voxel. The absorption of photons by certain voxels corresponding to particular pixelated anodes results in generated charges which can be counted. Counts can be correlated to particular locations and used to reconstruct an image.
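As a minimal, purely illustrative Python sketch of the count accumulation just described (assuming a simple 2D pixel grid; none of these names come from the patent), photon counts per pixelated anode can be binned into an array that is later used, together with system geometry, for image reconstruction:

import numpy as np

# Hypothetical 16 x 16 pixel module: accumulate photon counts per pixelated anode.
N_ROWS, N_COLS = 16, 16
counts = np.zeros((N_ROWS, N_COLS), dtype=np.int64)

def record_event(row, col):
    # Each photon absorbed in the voxel under anode (row, col) increments its count.
    counts[row, col] += 1

# Example: three photons absorbed under the anode at row 4, column 7.
for _ in range(3):
    record_event(4, 7)

# The per-pixel count map is the raw input used to reconstruct an image.
print(counts[4, 7])   # -> 3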
In various embodiments, each set of detection heads 1115 may define a corresponding view oriented towards the center of the bore 1112. Each set of detection heads 1115 in the illustrated embodiment is configured to acquire imaging information over a scanning range corresponding to the view of the given detection unit. Additional details regarding examples of systems with detection units arranged radially around a bore may be found in US Patent Application Serial Number 14/788,180, filed June 30, 2015, titled "Systems and Methods For Dynamic Scanning With Multi-Head Camera", the subject matter of which is incorporated by reference in its entirety.
The processing unit 1120 includes a memory 1122. The imaging system 1100 is shown as including a single processing unit 1120; however, the block for the processing unit 1120 can be understood as representing one or more processors that may be distributed or remote from one another. The processing unit 1120 shown includes processing circuitry configured to perform one or more of the tasks, functions, or steps discussed herein. It may be noted that the term "processing unit" as used herein is not necessarily intended to be limited to a single processor or computer. For example, the processing unit 1120 may include several processors and/or computers, which may be integrated in a common housing or unit, or which may be distributed among various units or housings.
Generally, the various aspects (for example, programmed modules) of the processing unit 1120 act individually or in cooperation with other aspects to perform one or more aspects of the methods, steps, or processes discussed herein. In the embodiment shown, the memory 1122 includes a tangible, non-transitory computer-readable medium having stored thereon instructions for performing one or more aspects of the methods, steps, or processes discussed herein.
It should be noted that the various embodiments may be implemented in hardware, software, or a combination thereof. The various embodiments and/or components, for example, the modules, or the components and control devices therein, may also be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit, and an interface, for example, for accessing the Internet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include random access memory (RAM) and read-only memory (ROM). The computer or processor may further include a storage device, which may be a hard disk drive or a removable storage drive such as a solid-state drive, an optical disk drive, and the like. The storage device may also be other similar means for loading computer programs or other instructions into the computer or processor.
As used herein, the term "computer" or "module" may include any processor-based or microprocessor-based system, including systems using microcontrollers, reduced instruction set computers (RISC), ASICs, logic circuits, and any other circuit or processor capable of performing the functions described herein. The above examples are given by way of example only, and are therefore not intended to limit in any way the definition and/or meaning of the term "computer".
The computer or the processor executes a set of instructions which are stored in one or more storage elements, in order to process input data. The storage elements can also store data or other information at will or as needed. The storage element can be in the form of an information source or a physical memory element within a processing machine.
The set of instructions may include various commands that instruct the computer or processor as a processing machine to perform specific operations such as the methods and processes of the various embodiments. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software, and may be embodied as a tangible, non-transitory computer-readable medium. Further, the software may be in the form of a collection of separate programs or modules, a program module within a larger program, or a portion of a program module. The software may also include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to operator commands, or in response to results of previous processing, or in response to a request made by another processing machine.
[0105] As used herein, a structure, limitation, or element that is "configured to" perform a task or operation is particularly structurally formed, constructed, or adapted in a manner corresponding to the task or operation. For purposes of clarity and the avoidance of doubt, an object that is merely capable of being modified to perform the task or operation is not "configured to" perform the task or operation as used herein. Instead, the use of "configured to" as used herein denotes structural adaptations or characteristics, and denotes the structural requirements of any structure, limitation, or element that is described as being "configured to" perform the task or operation. For example, a processing unit, processor, or computer that is "configured to" perform a task or operation may be understood as being particularly structured to perform the task or operation (for example, having one or more programs or instructions stored thereon or used in conjunction therewith tailored or intended to perform the task or operation, and/or having an arrangement of processing circuitry tailored or intended to perform the task or operation). For purposes of clarity and the avoidance of doubt, a general-purpose computer (which may become "configured to" perform the task or operation if appropriately programmed) is not "configured to" perform a task or operation unless or until it is specifically programmed or structurally modified to perform the task or operation.
As used herein, the terms "software" and "firmware" are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the various embodiments without departing from their scope. While the dimensions and types of materials described herein are intended to define the parameters of the various embodiments, they are by no means limiting and are merely exemplary. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein". Moreover, in the following claims, the terms "first", "second", and "third", etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on Section 112(f) of Title 35 of the United States Code, unless and until such claim limitations expressly use the phrase "means for" followed by a statement of function devoid of further structure.
This written description uses examples to disclose the various embodiments, including the best mode, and also to enable any person skilled in the art to practice the various embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the various embodiments is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if the examples have structural elements that do not differ from the literal language of the claims, or if the examples include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims

[Claim 1] A radiation detector assembly comprising: a semiconductor detector having a surface; a plurality of pixelated anodes disposed on the surface, each pixelated anode being configured to generate a primary signal in response to reception of a photon by the pixelated anode and to generate at least one secondary signal in response to an induced charge caused by reception of a photon by at least one surrounding anode; and at least one processor operatively coupled to the pixelated anodes, the at least one processor being configured to: acquire a primary signal from one of the anodes in response to reception of a photon by this anode of the anodes; acquire at least one secondary signal from at least one pixel neighboring this anode of the anodes in response to an induced charge caused by the reception of the photon by this anode of the anodes; and determine an interaction depth in the semiconductor detector for the reception of the photon by this anode of the anodes using the at least one secondary signal.

[Claim 2] The detector assembly of claim 1, wherein the at least one processor is configured to adjust an energy level for an event corresponding to the reception of the photon by this anode of the anodes as a function of the depth of interaction.

[Claim 3] The detector assembly of claim 1, wherein the at least one processor is configured to reconstruct an image using the depth of interaction.

[Claim 4] The detector assembly of claim 1, wherein the at least one neighboring pixel includes at least one adjacent anode.

[Claim 5] The detector assembly of claim 1, wherein the at least one processor is configured to use calibration information to determine the depth of interaction.

[Claim 6] The detector assembly of claim 5, wherein the at least one processor is configured to determine the depth of interaction using a calibration based on a ratio between a negative value of a single secondary signal and an amplitude of the primary signal.

[Claim 7] The detector assembly of claim 5, wherein the at least one processor is configured to determine the depth of interaction using a calibration based on a ratio between a sum of negative values for several secondary signals and an amplitude of the primary signal.

[Claim 8] The detector assembly of claim 1, wherein the at least one processor is configured to determine the depth of interaction without using any information from a cathode of the detector assembly.

[Claim 9] The detector assembly of claim 1, wherein the at least one processor is configured to determine a sub-pixel location using the primary signal and the at least one secondary signal.

[Claim 10] An imaging method using a semiconductor detector having a surface with multiple pixelated anodes disposed thereon, wherein each pixelated anode is configured to generate a primary signal in response to reception of a photon by the pixelated anode and to generate at least one secondary signal in response to an induced charge caused by the reception of a photon by at least one surrounding anode, the method comprising: acquiring a primary signal from one of the anodes in response to the reception of a photon by this anode of the anodes; acquiring at least one secondary signal from at least one pixel neighboring this anode of the anodes in response to an induced charge caused by the reception of the photon by this anode of the anodes; and determining an interaction depth in the semiconductor detector for the reception of the photon by this anode of the anodes using the at least one secondary signal.

[Claim 11] The method of claim 10, further comprising adjusting an energy level for an event corresponding to the reception of the photon by this anode of the anodes as a function of the depth of interaction.

[Claim 12] The method of claim 10, further comprising reconstructing an image using the depth of interaction.

[Claim 13] The method of claim 10, wherein the at least one neighboring pixel includes at least one adjacent anode.

[Claim 14] The method of claim 10, further comprising using calibration information to determine the depth of interaction.

[Claim 15] The method of claim 14, further comprising determining the depth of interaction using a calibration based on a ratio between a negative value of a single secondary signal and an amplitude of the primary signal.
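To illustrate the calibration relationship recited in claims 6, 7, and 15 (a ratio between the negative secondary-signal value, or a sum of such values, and the primary-signal amplitude, mapped to a depth of interaction), the following Python sketch is offered as a minimal, hypothetical example; the calibration table values, signal values, and function names are invented for illustration and are not taken from the patent:

import numpy as np

# Hypothetical calibration: measured ratio (-secondary / primary) versus a known
# normalized depth within the detector thickness (toy values only).
CALIB_RATIO = np.array([0.00, 0.02, 0.05, 0.09, 0.14])   # example ratios
CALIB_DEPTH = np.array([1.00, 0.75, 0.50, 0.25, 0.00])   # example normalized depths

def depth_of_interaction(primary_amplitude, secondary_values):
    # Ratio between the negative secondary signal(s) and the primary amplitude.
    # Summing the negative secondary values corresponds to the multi-neighbor case
    # (claim 7); with a single neighbor it reduces to the single-signal case (claim 6).
    negative_sum = -sum(v for v in secondary_values if v < 0)
    ratio = negative_sum / primary_amplitude
    # Interpolate the calibration curve to estimate the normalized depth.
    return float(np.interp(ratio, CALIB_RATIO, CALIB_DEPTH))

# Example: primary signal of 100 (arbitrary units), one neighboring anode with a
# negative transient of -5 -> ratio 0.05 -> depth 0.5 in this toy calibration.
print(depth_of_interaction(100.0, [-5.0, 0.0]))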